Computer Science > Computation and Language

[Submitted on 23 Mar 2023 (v1), last revised 18 Oct 2023 (this version, v2)]

Title: Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense

Authors: Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, Mohit Iyyer
Abstract: The rise in malicious usage of large language models, such as fake content creation and academic plagiarism, has motivated the development of approaches to identify AI-generated text, including those based on watermarking or outlier detection. However, the robustness of these detection algorithms to paraphrases of AI-generated text remains unclear. To stress-test these detectors, we build an 11B-parameter paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering. Using DIPPER to paraphrase text generated by three large language models (including GPT3.5-davinci-003) successfully evades several detectors, including watermarking, GPTZero, DetectGPT, and OpenAI's text classifier. For example, DIPPER drops the detection accuracy of DetectGPT from 70.3% to 4.6% (at a constant false positive rate of 1%) without appreciably modifying the input semantics.
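The "detection accuracy at a constant 1% false positive rate" metric used above can be illustrated with a minimal sketch (not the authors' evaluation code; the score distributions below are hypothetical stand-ins for real detector outputs such as DetectGPT scores):

```python
import numpy as np

def detection_accuracy_at_fpr(human_scores, ai_scores, target_fpr=0.01):
    """Choose the detector threshold so that only `target_fpr` of
    human-written texts are flagged, then measure the fraction of
    AI-generated texts scoring above that threshold."""
    threshold = np.quantile(np.asarray(human_scores), 1.0 - target_fpr)
    return float(np.mean(np.asarray(ai_scores) > threshold))

# Hypothetical detector scores (higher = "more likely AI-generated").
rng = np.random.default_rng(0)
human = rng.normal(0.0, 1.0, 10_000)  # scores on human-written text
ai = rng.normal(2.0, 1.0, 10_000)     # scores on AI-generated text
acc = detection_accuracy_at_fpr(human, ai, target_fpr=0.01)
```

Fixing the false positive rate this way makes detectors comparable: each is evaluated at the same tolerance for wrongly flagging human writing, and the paraphrase attack's effect shows up as a drop in `acc` alone.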
To increase the robustness of AI-generated text detection to paraphrase attacks, we introduce a simple defense that retrieves semantically similar generations from a database maintained by the language model API provider. Given a candidate text, our algorithm searches this database of sequences previously generated by the API, looking for sequences that match the candidate text within a certain threshold. We empirically verify our defense using a database of 15M generations from a fine-tuned T5-XXL model and find that it detects 80% to 97% of paraphrased generations across different settings while classifying only 1% of human-written sequences as AI-generated. We open-source our models, code, and data.
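The retrieval defense can be sketched as follows. This is a toy illustration, not the paper's implementation: the `embed` function here is a bag-of-words stand-in for the learned semantic retriever the defense would actually use, and all names (`GenerationDB`, `is_ai_generated`) are hypothetical:

```python
from collections import Counter
import math

def embed(text):
    """Toy stand-in for a semantic encoder: a unigram count vector.
    A real deployment would use a learned retriever, since paraphrases
    must still land near the original in the embedding space."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class GenerationDB:
    """Database of sequences previously generated by the API provider."""
    def __init__(self):
        self.entries = []

    def add(self, text):
        # Store every generation the API produces, with its embedding.
        self.entries.append((text, embed(text)))

    def is_ai_generated(self, candidate, threshold=0.7):
        """Flag the candidate if any stored generation matches it
        within the similarity threshold."""
        cand = embed(candidate)
        return any(cosine(cand, vec) >= threshold for _, vec in self.entries)
```

The key design point is that the defense sidesteps paraphrase attacks entirely: instead of judging the text's surface statistics, it checks whether the API ever produced something semantically close, which a meaning-preserving paraphrase cannot erase.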
Comments: NeurIPS 2023 camera ready (32 pages). Code, models, data available in this https URL
Subjects: Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
Cite as: arXiv:2303.13408 [cs.CL]
  (or arXiv:2303.13408v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2303.13408
arXiv-issued DOI via DataCite

Submission history

From: Kalpesh Krishna [view email]
[v1] Thu, 23 Mar 2023 16:29:27 UTC (675 KB)
[v2] Wed, 18 Oct 2023 02:29:55 UTC (691 KB)